
    The Computational Power of Beeps

    In this paper, we study the quantity of computational resources (state machine states and/or probabilistic transition precision) needed to solve specific problems in a single-hop network where nodes communicate using only beeps. We begin by focusing on randomized leader election. We prove a lower bound on the states required to solve this problem with a given error bound, probability precision, and (when relevant) network size lower bound. We then show that the bound is tight by giving a matching upper bound. Noting that our optimal upper bound is slow, we describe two faster algorithms that trade some state optimality to gain efficiency. We then turn our attention to more general classes of problems by proving that once you have enough states to solve leader election with a given error bound, you have (within constant factors) enough states to simulate correctly, with this same error bound, a logspace TM with a constant number of unary input tapes, allowing you to solve a large and expressive set of problems. These results identify a key simplicity threshold beyond which useful distributed computation is possible in the beeping model. Comment: Extended abstract to appear in the Proceedings of the International Symposium on Distributed Computing (DISC 2015).
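
    The abstract does not spell out its state-optimal algorithm, but a folklore knockout protocol gives a feel for the single-hop beeping model: every surviving contender beeps with probability 1/2 each round, and a contender that stays silent in a round where someone beeped drops out. The sketch below (names, round budget, and the global termination check are illustrative assumptions, not the paper's construction) simulates that idea.

```python
import random

def beep_leader_election(n, max_rounds=64, rng=random):
    """Folklore knockout leader election in a single-hop beep network.

    Simulation only: each surviving contender beeps with probability 1/2;
    a contender that stayed silent while someone else beeped drops out.
    This is NOT the state-optimal algorithm of the paper.
    """
    contenders = set(range(n))
    for _ in range(max_rounds):
        if len(contenders) <= 1:          # global termination check (simulation convenience)
            break
        beepers = {v for v in contenders if rng.random() < 0.5}
        if beepers:
            # Beeping nodes get no feedback; silent nodes hear the beep and retire.
            contenders = beepers
    return contenders                      # a single leader with high probability

if __name__ == "__main__":
    print(beep_leader_election(100))
```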

    Fast Consensus under Eventually Stabilizing Message Adversaries

    This paper is devoted to deterministic consensus in synchronous dynamic networks with unidirectional links, which are under the control of an omniscient message adversary. Motivated by unpredictable node/system initialization times and long-lasting periods of massive transient faults, we consider message adversaries that guarantee periods of less erratic message loss only eventually: We present a tight bound of 2D+1 for the termination time of consensus under a message adversary that eventually guarantees a single vertex-stable root component with dynamic network diameter D, as well as a simple algorithm that matches this bound. It effectively halves the termination time 4D+1 achieved by an existing consensus algorithm, which also works under our message adversary. We also introduce a generalized, considerably stronger variant of our message adversary, and show that our new algorithm, unlike the existing one, still works correctly under it. Comment: 13 pages, 5 figures, updated references.

    Correlated Multiphoton Holes

    We generate bipartite states of light which exhibit an absence of multiphoton coincidence events between two modes amid a constant background flux. These `correlated photon holes' are produced by mixing a coherent state and relatively weak spontaneous parametric down-conversion using a balanced beamsplitter. Correlated holes with arbitrarily high photon numbers may be obtained by adjusting the relative phase and amplitude of the inputs. We measure states of up to five photons and verify their nonclassicality. The scheme provides a route for observation of high-photon-number nonclassical correlations without requiring intense quantum resources. Comment: 4 pages, 3 figures, comments are welcome.
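
    Not from the paper, but possibly useful context: a balanced (50:50) beamsplitter mixes its two input modes as in the standard relation below, so the amplitude for a joint detection across the two output modes is a sum of interfering pathways; tuning the relative phase and amplitude of the coherent and down-converted inputs can make that sum cancel, which is the kind of suppression the `holes' refer to.

```latex
\hat{a}_{\mathrm{out}} = \frac{\hat{a}_{\mathrm{in}} + \hat{b}_{\mathrm{in}}}{\sqrt{2}},
\qquad
\hat{b}_{\mathrm{out}} = \frac{\hat{a}_{\mathrm{in}} - \hat{b}_{\mathrm{in}}}{\sqrt{2}}
```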

    Quiescent consistency: Defining and verifying relaxed linearizability

    Concurrent data structures like stacks, sets or queues need to be highly optimized to provide large degrees of parallelism with reduced contention. Linearizability, a key consistency condition for concurrent objects, sometimes limits the potential for optimization. Hence algorithm designers have started to build concurrent data structures that are not linearizable but only satisfy relaxed consistency requirements. In this paper, we study quiescent consistency as proposed by Shavit and Herlihy, which is one such relaxed condition. More precisely, we give the first formal definition of quiescent consistency, investigate its relationship with linearizability, and provide a proof technique for it based on (coupled) simulations. We demonstrate our proof technique by verifying quiescent consistency of a (non-linearizable) FIFO queue built using a diffraction tree.
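
    As a hand-worked illustration (ours, not the paper's formal definition): quiescent consistency only forces operations separated by a quiescent period, one with no pending operations, to take effect in real-time order, whereas linearizability also orders non-overlapping operations that are never separated by quiescence. The toy history below is allowed by the former but not the latter.

```python
# Toy history, one tuple per operation: (process, operation, start, end).
history = [
    ("p1", "enq(1)", 0, 2),    # completes before enq(2) even starts ...
    ("p2", "enq(2)", 3, 5),
    ("p3", "enq(3)", 1, 6),    # ... but p3's operation overlaps both, so the
                               # object is never quiescent between them
    ("p1", "deq() -> 2", 7, 8),
]
# Linearizability: enq(1) precedes enq(2) in real time, so the first dequeue
# may return 1 or 3 (enq(3) is concurrent with both), never 2.
# Quiescent consistency: no quiescent period separates enq(1) from enq(2),
# so they may be reordered and "deq() -> 2" is a legal outcome.
```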

    Space-Time Tradeoffs for Distributed Verification

    Verifying that a network configuration satisfies a given boolean predicate is a fundamental problem in distributed computing. Many variations of this problem have been studied, for example, in the context of proof labeling schemes (PLS), locally checkable proofs (LCP), and non-deterministic local decision (NLD). In all of these contexts, verification time is assumed to be constant. Korman, Kutten and Masuzawa [PODC 2011] presented a proof-labeling scheme for MST, with poly-logarithmic verification time, and logarithmic memory at each vertex. In this paper we introduce the notion of a t-PLS, which allows the verification procedure to run for super-constant time. Our work analyzes the tradeoffs of t-PLS between time, label size, message length, and computation space. We construct a universal t-PLS and prove that it uses the same amount of total communication as a known one-round universal PLS, and labels that are a factor of t smaller. In addition, we provide a general technique to prove lower bounds for space-time tradeoffs of t-PLS. We use this technique to show an optimal tradeoff for testing that a network is acyclic (cycle free). Our optimal t-PLS for acyclicity uses label size and computation space O((log n)/t). We further describe a recursive O(log* n) space verifier for acyclicity which does not assume previous knowledge of the run-time t. Comment: Pre-proceedings version of paper presented at the 24th International Colloquium on Structural Information and Communication Complexity (SIROCCO 2017).
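
    For context, the classical constant-time proof-labeling scheme for acyclicity labels every node with its distance to the root of its tree and has each node check only its neighbors' labels; the paper's t-PLS trades this label size against verification time, which the sketch below does not capture. Function names and the graph representation are illustrative assumptions.

```python
from collections import deque

def label_forest(adj):
    """Prover side of the classical PLS for acyclicity: label every node with
    its BFS distance to the root of its tree. 'adj' maps node -> set of
    neighbors; the honest prover runs it on a graph that really is a forest."""
    labels = {}
    for s in adj:
        if s in labels:
            continue
        labels[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in labels:
                    labels[w] = labels[u] + 1
                    queue.append(w)
    return labels

def locally_verify(adj, labels):
    """Verifier side: each node looks only at its own and its neighbors' labels.
    Roots (label 0) require all neighbors to carry label 1; every other node
    requires exactly one neighbor at label-1 and the rest at label+1."""
    for v, nbrs in adj.items():
        d = labels[v]
        if d == 0:
            if any(labels[w] != 1 for w in nbrs):
                return False
        else:
            if sum(labels[w] == d - 1 for w in nbrs) != 1:
                return False
            if any(labels[w] not in (d - 1, d + 1) for w in nbrs):
                return False
    return True

if __name__ == "__main__":
    path = {0: {1}, 1: {0, 2}, 2: {1}}                   # acyclic: accepted
    print(locally_verify(path, label_forest(path)))       # True
    triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}          # cyclic: no labeling passes
    print(any(locally_verify(triangle, {0: a, 1: b, 2: c})
              for a in range(3) for b in range(3) for c in range(3)))  # False
```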

    Insights into the Fallback Path of Best-Effort Hardware Transactional Memory Systems

    DOI: 10.1007/978-3-319-43659-3. Current industry proposals for Hardware Transactional Memory (HTM) focus on best-effort solutions (BE-HTM) where hardware limits are imposed on transactions. These designs may show a significant performance degradation due to high contention scenarios and different hardware and operating system limitations that abort transactions, e.g. cache overflows, hardware and software exceptions, etc. To deal with these events and to ensure forward progress, BE-HTM systems usually provide a software fallback path to execute a lock-based version of the code. In this paper, we propose a hardware implementation of an irrevocability mechanism as an alternative to the software fallback path to gain insight into the hardware improvements that could enhance the execution of such a fallback. Our mechanism anticipates the abort that causes the transaction serialization, and stalls other transactions in the system so that transactional work loss is minimized. In addition, we evaluate the main software fallback path approaches and propose the use of ticket locks that hold precise information of the number of transactions waiting to enter the fallback. Thus, the separation of transactional and fallback execution can be achieved in a precise manner. The evaluation is carried out using the Simics/GEMS simulator and the complete range of STAMP transactional suite benchmarks. We obtain significant performance benefits of around twice the speedup and an abort reduction of 50% over the software fallback path for a number of benchmarks. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
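
    Hardware transactions cannot be expressed in Python, but the ticket-lock idea behind the proposed fallback can: the gap between the next ticket and the ticket being served tells exactly how many threads hold or wait for the fallback, which a transactional path can consult before retrying or serializing. The sketch below is an illustrative assumption of how such a policy might look, not the paper's implementation.

```python
import threading

class TicketLock:
    """Fallback-path lock sketch: next_ticket - now_serving counts the threads
    that currently hold or wait for the fallback (illustrative only)."""
    def __init__(self):
        self._cond = threading.Condition()   # stands in for atomic fetch-and-add hardware
        self.next_ticket = 0
        self.now_serving = 0

    def waiters(self):
        # Approximate unlocked read; good enough for a retry heuristic.
        return self.next_ticket - self.now_serving

    def acquire(self):
        with self._cond:
            my_ticket = self.next_ticket
            self.next_ticket += 1
            while self.now_serving != my_ticket:
                self._cond.wait()

    def release(self):
        with self._cond:
            self.now_serving += 1
            self._cond.notify_all()

def run_transaction(lock, speculative_body, fallback_body, max_retries=3):
    """Best-effort pattern: retry speculatively while nobody is in the fallback;
    otherwise, or after too many retries, serialize on the ticket lock."""
    for _ in range(max_retries):
        if lock.waiters() == 0:
            try:
                speculative_body()            # would be an HTM transaction in hardware
                return
            except RuntimeError:              # stands in for a transactional abort
                pass
    lock.acquire()
    try:
        fallback_body()
    finally:
        lock.release()
```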

    Beeping a Maximal Independent Set

    We consider the problem of computing a maximal independent set (MIS) in an extremely harsh broadcast model that relies only on carrier sensing. The model consists of an anonymous broadcast network in which nodes have no knowledge about the topology of the network or even an upper bound on its size. Furthermore, it is assumed that an adversary chooses at which time slot each node wakes up. At each time slot a node can either beep, that is, emit a signal, or be silent. At a particular time slot, beeping nodes receive no feedback, while silent nodes can only differentiate between none of their neighbors beeping, or at least one of their neighbors beeping. We start by proving a lower bound that shows that in this model, it is not possible to locally converge to an MIS in sub-polynomial time. We then study four different relaxations of the model which allow us to circumvent the lower bound and find an MIS in polylogarithmic time. First, we show that if a polynomial upper bound on the network size is known, it is possible to find an MIS in O(log^3 n) time. Second, if we assume sleeping nodes are awoken by neighboring beeps, then we can also find an MIS in O(log^3 n) time. Third, if in addition to this wakeup assumption we allow sender-side collision detection, that is, beeping nodes can distinguish whether at least one neighboring node is beeping concurrently or not, we can find an MIS in O(log^2 n) time. Finally, if instead we endow nodes with synchronous clocks, it is also possible to find an MIS in O(log^2 n) time. Comment: arXiv admin note: substantial text overlap with arXiv:1108.192
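
    The paper's algorithms are not reproduced here, but a simplified Luby-style simulation conveys the round structure they build on: active nodes beep with some probability, a beeper with no beeping neighbor joins the MIS, and MIS nodes plus their neighbors retire. The sketch below assumes synchronized rounds and a global view for termination, exactly the relaxations the paper works to remove.

```python
import random

def beeping_mis(adj, p=0.5, rng=random):
    """Simplified Luby-style MIS in beep-like rounds (global simulation).
    'adj' maps node -> set of neighbors. Not the paper's algorithm: it assumes
    synchronized rounds, simultaneous wake-up, and global termination detection."""
    active = set(adj)
    mis = set()
    while active:
        beepers = {v for v in active if rng.random() < p}
        for v in beepers:
            # v joins the MIS if no neighbor beeped in the same round.
            if not (adj[v] & beepers):
                mis.add(v)
        # MIS nodes and their neighbors leave the competition.
        done = mis | {w for v in mis for w in adj[v]}
        active -= done
    return mis

if __name__ == "__main__":
    ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
    print(beeping_mis(ring))
```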

    On the Importance of Registers for Computability

    All consensus hierarchies in the literature assume that we have, in addition to copies of a given object, an unbounded number of registers. But why do we really need these registers? This paper considers what would happen if one attempts to solve consensus using various objects but without any registers. We show that under a reasonable assumption, objects like queues and stacks cannot emulate the missing registers. We also show that, perhaps surprisingly, initialization, shown to have no computational consequences when registers are readily available, is crucial in determining the synchronization power of objects when no registers are allowed. Finally, we show that without registers, the number of available objects affects the level of consensus that can be solved. Our work thus raises the question of whether consensus hierarchies which assume an unbounded number of registers truly capture synchronization power, and begins a line of research aimed at better understanding the interaction between read-write memory and the powerful synchronization operations available on modern architectures. Comment: 12 pages, 0 figures
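
    A textbook example (Herlihy's classical construction, not this paper's) shows what the registers are doing: two processes can solve consensus with one pre-initialized queue plus two registers, and the register read is precisely the step that lets the losing process learn the winner's input. Class and variable names below are illustrative.

```python
from collections import deque
from threading import Lock

class TwoProcessConsensus:
    """Textbook 2-process consensus from one queue plus two registers
    (illustrative Herlihy-style construction, not this paper's model)."""
    def __init__(self):
        self.queue = deque(["winner", "loser"])   # pre-initialized shared queue
        self.register = [None, None]              # one single-writer register per process
        self._q_lock = Lock()                     # simulates the queue's atomic dequeue

    def propose(self, pid, value):
        self.register[pid] = value                # announce my input in my register
        with self._q_lock:
            token = self.queue.popleft()          # atomic dequeue
        if token == "winner":
            return value                          # I dequeued first: decide my own input
        # I lost the race: the *register* is what lets me learn the winner's input.
        # Without registers, the paper shows queues/stacks alone cannot fill this gap.
        return self.register[1 - pid]
```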

    Relating L-Resilience and Wait-Freedom via Hitting Sets

    The condition of t-resilience stipulates that an n-process program is only obliged to make progress when at least n-t processes are correct. Put another way, the live sets, the collection of process sets such that progress is required if all the processes in one of these sets are correct, are all sets with at least n-t processes. We show that the ability of an arbitrary collection of live sets L to solve distributed tasks is tightly related to the minimum hitting set of L, a minimum cardinality subset of processes that has a non-empty intersection with every live set. Thus, finding the computing power of L is NP-complete. For the special case of colorless tasks that allow participating processes to adopt input or output values of each other, we use a simple simulation to show that a task can be solved L-resiliently if and only if it can be solved (h-1)-resiliently, where h is the size of the minimum hitting set of L. For general tasks, we characterize L-resilient solvability of tasks with respect to a limited notion of weak solvability: in every execution where all processes in some set in L are correct, outputs must be produced for every process in some (possibly different) participating set in L. Given a task T, we construct another task T_L such that T is solvable weakly L-resiliently if and only if T_L is solvable weakly wait-free.
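
    To make the parameter h concrete, here is a brute-force minimum hitting set computation (ours, for illustration only; the problem is NP-hard in general, which the abstract's NP-completeness claim reflects). For the live sets {1,2}, {2,3}, {1,3} the minimum hitting set has size h = 2, so by the abstract's characterization a colorless task is solvable under this L iff it is 1-resiliently solvable.

```python
from itertools import combinations

def min_hitting_set(live_sets):
    """Smallest set of processes intersecting every live set.
    Exponential brute force; fine only for small illustrations."""
    processes = sorted(set().union(*live_sets))
    for k in range(1, len(processes) + 1):
        for cand in combinations(processes, k):
            if all(set(cand) & s for s in live_sets):
                return set(cand)
    return set(processes)

if __name__ == "__main__":
    L = [{1, 2}, {2, 3}, {1, 3}]
    h = len(min_hitting_set(L))   # h = 2 for this collection of live sets
    print(h)
```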

    Adaptive Stabilization of Reactive Protocols

    A self-stabilizing distributed protocol can recover from any state-corrupting fault. A self-stabilizing protocol is called adaptive if its recovery time is proportional to the number of processors hit by the fault. General adaptive protocols are known for the special case of function computations: these are tasks that map static distributed inputs to static distributed outputs. In reactive distributed systems, input values at each node change on-line, and dynamic distributed outputs are to be generated in response in an on-line fashion. To date, only some specific reactive tasks have had an adaptive implementation. In this paper we outline the first proof that all reactive tasks admit adaptive protocols. The key ingredient of the proof is an algorithm for distributing input values in an adaptive fashion. Our algorithm is optimal, up to a constant factor, in its fault resilience, response time, and recovery time.